
Export Reviews, Discussions, Author Feedback and Meta-Reviews

Neural Information Processing Systems

"NIPS Neural Information Processing Systems 8-11th December 2014, Montreal, Canada",,, "Paper ID:","592" "Title:","Inferring synaptic conductances from spike trains with a biophysically inspired point process model" Current Reviews First provide a summary of the paper, and then address the following criteria: Quality, clarity, originality and significance. The authors propose a conductance based spiking model (CBSM) that is more biophysically realistic than the currently popular generalized linear model (GLM). Furthermore, the authors present CBSM as a generalization of the GLM and propose a set of constraints that can reduce it to a GLM and a GLM variant that would be as adaptive as the CBSM. The proposed model is an interesting extension to current spiking models in that it is parametrized in a more descriptive way of the spiking process without sacrificing much of the mathematical convenience of the GLM. One thing that could raise some concerns stems from the last paragraph of page 6.



The paper proposes to use a deep convolutional neural network for deblurring images, training it on a large set of blurred/sharp image pairs generated with a synthetic blurring process. The proposed method achieves good results on a number of image deblurring tasks. The idea is simple and elegant. It is observed that a 2D deconvolution, i.e. the inverse of a convolution operator, is itself a 2D convolution operator, albeit with a very large support.
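The observation about deconvolution can be checked numerically in 1D: inverting a short blur filter in the frequency domain yields a convolution kernel whose support is noticeably wider than the blur's. The 3-tap kernel and test signal below are illustrative, not from the paper.

```python
import numpy as np

N = 64
blur = np.zeros(N)
blur[:3] = [0.7, 0.2, 0.1]        # a short, illustrative 3-tap blur kernel

# Deconvolution is pointwise division in the frequency domain, so the
# deblurring operator is itself a (circular) convolution whose kernel is
# the inverse DFT of 1/H.
H = np.fft.fft(blur)
inv_kernel = np.real(np.fft.ifft(1.0 / H))

# The inverse kernel exactly undoes the blur on a random test signal...
signal = np.random.default_rng(0).standard_normal(N)
blurred = np.real(np.fft.ifft(np.fft.fft(signal) * H))
restored = np.real(np.fft.ifft(np.fft.fft(blurred) * np.fft.fft(inv_kernel)))

# ...but its support is wider than the 3 taps of the blur itself.
support = int(np.sum(np.abs(inv_kernel) > 1e-3))
```

For realistic 2D blurs the effect is much more pronounced, which motivates using a CNN with large receptive fields rather than a single small convolution.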



The paper introduces a full model for tracking that allows for multiple, varying numbers of hypotheses as well as clutter. It promises clear notation and fast algorithms through the use of variational/Baum-Welch-type inference. The experiments appear extensive and are performed on real-world data. The key novelty of the paper lies in its treatment of the assignment problem (also known as data association); tracking itself, as the authors acknowledge, is a well-trodden field.



This work develops a new exact algorithm for structure learning of chordal Markov networks (MNs) under decomposable score functions. The algorithm implements a dynamic-programming approach built on recursive partition trees: structures equivalent to junction trees that are well suited to decomposing the problem into smaller instances, thereby enabling dynamic programming. The authors review the literature, prove the correctness of their algorithm, and compare it against a modified version of GOBNILP, which implements a state-of-the-art method for exact Bayesian network structure learning. The paper is well written, relevant for NIPS, and technically sound.



Title: Message Passing Inference for Large Scale Graphical Models with High Order Potentials

The paper follows up on recent work on parallelizing message-passing algorithms; the main contribution is handling higher-order potentials. The key insight is that when large potentials are unary, the messages they send out are constant, and this can be exploited by adding auxiliary factors (Fig. 2 gives an example).



This paper introduces the idea of deep Gaussian mixture models. A standard GMM can be viewed as a single isotropic, unit-variance Gaussian to which each mixture component applies a different linear transformation. This idea is extended to a multilayer network, in which each node corresponds to a linear transformation and each route through the network corresponds to a sequence of linear transformations; the number of mixture components is then the number of routes through the network.
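A minimal sampling sketch of this construction (hypothetical layer sizes, untrained random transformations, not the paper's fitted model): each sample picks one linear map per layer, so the implied mixture has one component per route.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-layer deep GMM over 2-D data: each layer holds a few random
# (untrained) linear transformations (A, b); a mixture component corresponds
# to one route through the network, i.e. one transformation per layer.
layers = [
    [(rng.standard_normal((2, 2)), rng.standard_normal(2)) for _ in range(3)],
    [(rng.standard_normal((2, 2)), rng.standard_normal(2)) for _ in range(2)],
]

# Number of mixture components = number of routes = product of layer widths.
n_components = 1
for layer in layers:
    n_components *= len(layer)        # 3 * 2 = 6 routes

def sample(n):
    """Draw n samples: start from a standard isotropic Gaussian, then apply
    one uniformly chosen linear transformation per layer."""
    out = np.empty((n, 2))
    for i in range(n):
        z = rng.standard_normal(2)
        for layer in layers:
            A, b = layer[rng.integers(len(layer))]
            z = A @ z + b
        out[i] = z
    return out

samples = sample(1000)
```

The appeal of the construction is parameter sharing: six components are expressed with only five transformations, and deeper networks give exponentially many routes from linearly many parameters.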



The authors present L-FIELD, a variational approach to general log-submodular and log-supermodular distributions. The theoretical contributions include upper and lower bounds on the log-partition function as well as fully factorized approximate posteriors; the quality of the approximation is characterized in terms of the curvature of the function. Empirical results are presented on GMM cuts and MRFs, decomposable functions, and facility-location modeling.
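As a toy illustration of the kind of bound involved (not the L-FIELD bounds themselves): for any log-submodular distribution p(S) ∝ exp(F(S)), any fully factorized distribution q yields the mean-field lower bound log Z ≥ E_q[F] + H(q). The facility-location instance below is small enough to compare against the exact log-partition by enumeration; sizes and weights are arbitrary placeholders.

```python
import itertools
import math

import numpy as np

rng = np.random.default_rng(3)
n_items, n_customers = 4, 5
w = rng.random((n_customers, n_items))    # illustrative facility-location weights

def F(S):
    """Facility location (submodular): each customer uses its best open facility."""
    if not S:
        return 0.0
    return float(w[:, list(S)].max(axis=1).sum())

# Exact log-partition of p(S) ∝ exp(F(S)) by enumerating all 2^n subsets
# (feasible only for a tiny ground set).
subsets = [frozenset(S) for r in range(n_items + 1)
           for S in itertools.combinations(range(n_items), r)]
logZ = math.log(sum(math.exp(F(S)) for S in subsets))

# Fully factorized (mean-field) lower bound: for any product distribution q,
# log Z >= E_q[F] + H(q).  Here q includes each item independently w.p. 1/2.
EqF = sum(F(S) * 0.5 ** n_items for S in subsets)
Hq = n_items * math.log(2.0)
lower_bound = EqF + Hq
```

The point of L-FIELD-style variational methods is to obtain such bounds without the exponential enumeration used here for verification.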



"NIPS Neural Information Processing Systems 8-11th December 2014, Montreal, Canada",,, "Paper ID:","1233" "Title:","A Multiplicative Model for Learning Distributed Text-Based Attribute Representations" Current Reviews First provide a summary of the paper, and then address the following criteria: Quality, clarity, originality and significance. This paper proposes to incorporate side information for improving vector-space embedding of words via an attribute vector that modulates the word-projection matrices. One could simply think of word-projection tensors (although, in practice the tensors are factorized) where the attribute vector provide the loadings for the tensor slices. This is studied in the context of log-bilinear language models, but the basic idea should be applicable to other word embedding work. The theory part of the paper is very well-written. However, it is in the experimental section that things get somewhat muddier.



The paper proposes an algorithm for estimating the (\delta,\rho)-modes of a distribution. The algorithm approximates the distribution with a tree graphical model, constructed from samples of the original distribution, and then finds the modes of that tree model. For the latter step, the authors propose an algorithm based on constructing an appropriate junction tree that enforces the mode constraints. The algorithm runs in time polynomial in most parameters, but exponential in the degree of the tree.
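The paper's junction-tree machinery for (\delta,\rho)-modes is more involved than can be shown here; as a baseline illustration, standard max-product dynamic programming finds the single global mode of a tree model in time linear in the number of nodes. The tiny tree and potentials below are random placeholders.

```python
import numpy as np

rng = np.random.default_rng(2)

# A tiny tree-structured MRF: root 0 with children 1 and 2, 3 states per node.
children = {0: [1, 2], 1: [], 2: []}
S = 3
unary = {v: rng.random(S) for v in children}
pair = {(0, 1): rng.random((S, S)), (0, 2): rng.random((S, S))}  # (parent, child)

def tree_map():
    """Max-product dynamic programming: upward max-marginals, then backtrack."""
    back = {}

    def up(v):
        # b[s] = best achievable score of v's subtree given v is in state s
        b = unary[v].copy()
        for c in children[v]:
            mc = up(c)
            scores = pair[(v, c)] * mc[None, :]      # (parent_state, child_state)
            back[(v, c)] = scores.argmax(axis=1)     # best child state per parent state
            b *= scores.max(axis=1)
        return b

    root_belief = up(0)
    assign = {0: int(root_belief.argmax())}

    def down(v):
        for c in children[v]:
            assign[c] = int(back[(v, c)][assign[v]])
            down(c)

    down(0)
    return assign, float(root_belief.max())

assign, score = tree_map()
```

Enumerating all local (\delta,\rho)-modes rather than the single maximizer is what requires the paper's extra junction-tree constraints, and is where the exponential dependence on the tree's degree enters.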